82 research outputs found

    An autoencoder compression approach for accelerating large-scale inverse problems

    PDE-constrained inverse problems are some of the most challenging and computationally demanding problems in computational science today. The fine meshes required to compute the PDE solution accurately introduce an enormous number of parameters, and solving such systems in a reasonable time requires large-scale computing resources (more processors and more memory). For inverse problems constrained by time-dependent PDEs, the adjoint method often employed to compute gradients and higher-order derivatives efficiently requires solving a time-reversed, so-called adjoint PDE that depends on the forward PDE solution at each timestep. This necessitates storing a high-dimensional forward solution vector at every timestep, a procedure that quickly exhausts the available memory. Several approaches that trade additional computation for a reduced memory footprint have been proposed to mitigate this bottleneck, including checkpointing and compression strategies. In this work, we propose a close-to-ideal, scalable compression approach that uses autoencoders to eliminate the need for checkpointing and substantial memory storage, thereby reducing both the time-to-solution and the memory requirements. We compare our approach with checkpointing and an off-the-shelf compression approach on an Earth-scale, ill-posed seismic inverse problem. The results verify the expected close-to-ideal speedup for both the gradient and the Hessian-vector product using the proposed autoencoder compression. To highlight the usefulness of the approach, we combine autoencoder compression with the data-informed active subspace (DIAS) prior, showing how the DIAS method can be affordably extended to large-scale problems without the need for checkpointing or large memory.
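
    The compress-instead-of-store trade at the heart of this abstract can be illustrated with a short sketch. This is a minimal illustration, not the paper's implementation: the toy PDE step, network sizes, and all hyperparameters below are assumptions, and the real method would train the autoencoder offline on representative forward snapshots.

```python
# A minimal sketch of the compress-instead-of-store idea, not the paper's
# implementation. The toy "PDE step", network sizes, and hyperparameters
# are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, LATENT_DIM, N_STEPS = 4096, 64, 200  # assumed toy sizes

class Autoencoder(nn.Module):
    def __init__(self, state_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, state_dim))

def pde_step(u):
    """Stand-in for one explicit timestep of the forward PDE solve."""
    return 0.99 * u + 0.01 * torch.roll(u, 1)

ae = Autoencoder(STATE_DIM, LATENT_DIM)  # assumed pre-trained; training omitted

# Forward sweep: keep only the latent code of each snapshot
# (LATENT_DIM floats per step instead of STATE_DIM).
u = torch.randn(STATE_DIM)
latents = []
with torch.no_grad():
    for _ in range(N_STEPS):
        u = pde_step(u)
        latents.append(ae.encoder(u))

# Adjoint sweep: walk backward in time, decoding each stored forward
# state on demand instead of recomputing it (checkpointing) or holding
# the full trajectory in memory.
with torch.no_grad():
    for z in reversed(latents):
        u_approx = ae.decoder(z)
        # ... use u_approx in the adjoint PDE right-hand side ...
```

    With these assumed sizes, each stored snapshot shrinks from 4096 floats to 64 (a 64x reduction), at the cost of one decoder evaluation per adjoint timestep: exactly the computation-for-memory trade the abstract describes.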

    Lactation and neonatal nutrition: defining and refining the critical questions.

    This paper resulted from a conference entitled "Lactation and Milk: Defining and refining the critical questions," held at the University of Colorado School of Medicine on January 18-20, 2012. The mission of the conference was to identify unresolved questions and set future goals for research into human milk composition, mammary development and lactation. We first outline the unanswered questions regarding the composition of human milk (Section I) and the mechanisms by which milk components affect neonatal development, growth and health, and recommend models for future research. Emerging questions about how milk components affect cognitive development and the behavioral phenotype of the offspring are presented in Section II. In Section III we outline the important unanswered questions about the regulation of mammary gland development, the heritability of defects, and the effects of maternal nutrition, disease, metabolic status, and therapeutic drugs on subsequent lactation. Questions surrounding breastfeeding practice are also highlighted. In Section IV we describe the specific nutritional challenges faced by three different populations: preterm infants, infants born to obese mothers who may or may not have gestational diabetes, and infants born to undernourished mothers. The recognition that multidisciplinary training is critical to advancing the field led us to formulate specific training recommendations in Section V. Our recommendations for research emphasis are summarized in Section VI. In sum, we present a roadmap for multidisciplinary research into all aspects of human lactation, milk, and its role in infant nutrition for the next decade and beyond.

    Genomic Arrangement of Regulons in Bacterial Genomes

    Regulons, as groups of transcriptionally co-regulated operons, are the basic units of cellular response systems in bacterial cells. While the concept has been widely used in bacterial studies since it was first proposed in 1964, very little is known about how a regulon's component operons are arranged in a bacterial genome. We present a computational study to elucidate the organizational principles of regulons in a bacterial genome, based on the experimentally validated regulons of E. coli and B. subtilis. Our results indicate that (1) the genomic locations of transcription factors (TFs) are under stronger evolutionary constraints than those of the operons they regulate, so changing a TF's genomic location has a larger impact on the bacterium than changing the genomic position of any of its target operons; (2) the operons of a regulon are generally not uniformly distributed in the genome but tend to form a few closely located clusters, which generally consist of genes working in the same metabolic pathways; and (3) the global arrangement of the component operons of all the regulons in a genome tends to minimize a simple scoring function, indicating that the global arrangement of regulons follows simple organizational principles.
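
    To make point (3) concrete, here is one plausible form such a clustering score could take; the abstract does not specify the actual function, so the metric, genome length, and operon positions below are illustrative assumptions.

```python
# One plausible clustering score, sketched as an assumption; the abstract
# does not give the authors' actual scoring function. Genome length and the
# operon positions attached to two (real) E. coli regulators are made up.
from itertools import combinations

GENOME_LEN = 4_600_000  # roughly E. coli-sized, for illustration

def circular_dist(a, b, genome_len=GENOME_LEN):
    """Shortest distance between two positions on a circular chromosome."""
    d = abs(a - b) % genome_len
    return min(d, genome_len - d)

def regulon_score(positions):
    """Mean pairwise circular distance among a regulon's operons;
    smaller values mean tighter clustering."""
    pairs = list(combinations(positions, 2))
    if not pairs:
        return 0.0
    return sum(circular_dist(a, b) for a, b in pairs) / len(pairs)

# Hypothetical operon start positions (bp) for two regulons.
regulons = {
    "araC": [70_100, 71_400, 73_000],           # tightly clustered
    "lexA": [1_020_000, 2_500_000, 4_100_000],  # scattered
}

for name, pos in regulons.items():
    print(f"{name}: {regulon_score(pos):,.0f} bp")
# A genome-wide score is then the sum over all regulons; arrangements with
# tight per-regulon clusters minimize it.
print(f"global: {sum(regulon_score(p) for p in regulons.values()):,.0f} bp")
```

    Under a score like this, the tightly clustered regulon contributes a few kilobases while the scattered one contributes over a megabase, so minimizing the global sum rewards arrangements in which each regulon's operons sit close together.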

    Space Science Opportunities Augmented by Exploration Telepresence

    Since the end of the Apollo missions to the lunar surface in December 1972, humanity has exclusively conducted scientific studies on distant planetary surfaces using teleprogrammed robots. Operations and science return for all of these missions are constrained by two issues related to the great distances between terrestrial scientists and their exploration targets: high communication latencies and limited data bandwidth. Despite the proven successes of in-situ science conducted with teleprogrammed robotic assets such as the Spirit, Opportunity, and Curiosity rovers on the surface of Mars, future planetary field research may substantially overcome latency and bandwidth constraints through a variety of alternative strategies: 1) placing scientists/astronauts directly on planetary surfaces, as was done in the Apollo era; 2) developing fully autonomous robotic systems capable of conducting in-situ field science; or 3) teleoperation of robotic assets by humans sufficiently close to the exploration targets to drastically reduce latencies and significantly increase bandwidth, thereby achieving effective human telepresence. This third strategy was the focus of experts in telerobotics, telepresence, planetary science, and human spaceflight during two workshops held October 3–7, 2016, and July 7–13, 2017, at the Keck Institute for Space Studies (KISS). Based on findings from these workshops, this document describes the conceptual and practical foundations of low-latency telepresence (LLT), opportunities for using derivative approaches for the scientific exploration of planetary surfaces, and the circumstances under which telepresence would be especially productive for planetary science. An important finding of these workshops is that the advantages of conducting planetary science via LLT have received limited study. A major recommendation is that space agencies such as NASA should substantially increase science return through greater investments in this promising strategy for humans to conduct science at distant exploration sites.
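
    The latency constraint that motivates LLT is simple light-time arithmetic. The sketch below, using approximate public distance figures chosen only for illustration, contrasts round-trip delays for Earth-based teleoperation of Mars assets with teleoperation from Mars orbit.

```python
# Back-of-the-envelope light-time arithmetic behind the latency argument.
# Distances are approximate public figures, used only for illustration.
C_KM_S = 299_792.458  # speed of light in km/s

def round_trip(distance_km):
    """Round-trip signal delay in seconds, ignoring processing overhead."""
    return 2 * distance_km / C_KM_S

scenarios = {
    "Earth to Mars, closest approach (~54.6e6 km)": 54.6e6,
    "Earth to Mars, farthest (~401e6 km)": 401e6,
    "Mars orbit to surface (~400 km)": 400,
}
for name, d in scenarios.items():
    rt = round_trip(d)
    print(f"{name}: {rt/60:.1f} min" if rt > 60 else f"{name}: {rt*1000:.1f} ms")
```

    The gap of roughly six to forty-four minutes versus a few milliseconds is what separates teleprogramming from the interactive, real-time control the workshops term low-latency telepresence.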
